Loop-level speculative parallelism analysis of kernel program in TACLeBench
MENG Huiling, WANG Yaobin, LI Ling, YANG Yang, WANG Xinyi, LIU Zhiqin
Journal of Computer Applications    2021, 41 (9): 2652-2657.   DOI: 10.11772/j.issn.1001-9081.2020111792
Thread-Level Speculation (TLS) technology can tap the parallel execution potential of programs and improve the utilization of multi-core resources. However, the TACLeBench kernel benchmarks have not been effectively analyzed for TLS parallelization. To address this problem, a loop-level speculative execution analysis scheme and an analysis tool were designed. With 7 representative TACLeBench kernel benchmarks selected, firstly, initialization analysis was performed on the programs, and hot program fragments were selected for insertion of loop identifiers. Then, these fragments were cross-compiled, the speculative-thread and memory-address related data were recorded, and the maximum potential of loop-level parallelism was analyzed. Finally, the runtime characteristics of the programs (thread granularity, parallelizable coverage, dependency characteristics) and the impact of the source code on the speedup were discussed comprehensively. Experimental results show that: 1) this type of program is suitable for TLS acceleration; compared with serial execution, under speculative execution of loop structures the speedup ratios of most programs are above 2, with the highest reaching 20.79; 2) by using TLS to accelerate the TACLeBench kernel programs, most applications can effectively make use of 4-core to 16-core computing resources.
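The relationship between parallelizable coverage and attainable speedup can be illustrated with a simple Amdahl-style bound. This is only a back-of-the-envelope sketch (function name and the 0.95 coverage figure are invented for illustration), not the paper's measurement tool:

```python
# Illustrative only: upper-bound speedup from parallelizable loop coverage
# (Amdahl-style), not the paper's profiling methodology.

def speculative_speedup_bound(coverage, cores):
    """Best-case speedup when a fraction `coverage` of run time is loop
    code that speculative threads can execute across `cores` cores."""
    if not 0.0 <= coverage <= 1.0:
        raise ValueError("coverage must lie in [0, 1]")
    return 1.0 / ((1.0 - coverage) + coverage / cores)

# A program whose loops cover 95% of execution saturates quickly: cores
# beyond ~16 add little, consistent with the observation that most
# kernels exploit 4 to 16 cores effectively.
for n in (4, 16, 64):
    print(n, round(speculative_speedup_bound(0.95, n), 2))
```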
Oversampling method for intrusion detection based on clustering and instance hardness
WANG Yao, SUN Guozi
Journal of Computer Applications    2021, 41 (6): 1709-1714.   DOI: 10.11772/j.issn.1001-9081.2020091378
Aiming at the problem of low detection efficiency of intrusion detection models caused by the imbalance of network traffic data, a new Clustering and instance Hardness-based Oversampling method for intrusion detection (CHO) was proposed. Firstly, the hardness values of the minority data were measured as input by calculating the proportion of majority-class samples among the neighbors of each minority-class sample. Secondly, the Canopy clustering approach was used to pre-cluster the minority data, and the obtained number of clusters was taken as the clustering parameter of the K-means++ approach for a second round of clustering. Then, the average hardness and the standard deviation of each cluster were calculated; the former was taken as the "investigation cost" in the optimum allocation theory of statistics, which determined the amount of data to be generated in each cluster. Finally, "safe" regions in the clusters were further identified according to the hardness values, and the specified amount of data was generated in these safe regions by interpolation. Comparative experiments were carried out on 6 open intrusion detection datasets. The proposed method achieves the optimal values of 1.33 on both Area Under Curve (AUC) and Geometric mean (G-mean), and improves AUC by 1.6 percentage points on average compared to the Synthetic Minority Oversampling TEchnique (SMOTE) on 4 of the 6 datasets. The experimental results show that the proposed method is well suited to imbalance problems in intrusion detection.
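The instance-hardness measure described above can be sketched in a few lines: the hardness of a minority sample is the fraction of majority-class samples among its k nearest neighbours. Function names and the toy 2-D points are illustrative, not from the paper:

```python
# Sketch of the instance-hardness measure: fraction of majority-class
# samples among the k nearest neighbours of each minority sample.
import math

def hardness(minority, majority, k=3):
    """Return one hardness value in [0, 1] per minority sample."""
    everyone = [(p, 0) for p in minority] + [(p, 1) for p in majority]
    scores = []
    for x in minority:
        # Distance and class label (1 = majority) of every other sample.
        others = [(math.dist(x, p), label) for p, label in everyone if p != x]
        others.sort(key=lambda t: t[0])
        top = others[:k]
        scores.append(sum(label for _, label in top) / k)
    return scores

minority = [(0.0, 0.0), (5.0, 5.0)]
majority = [(0.1, 0.0), (0.0, 0.1), (9.0, 9.0), (9.0, 8.0), (8.0, 9.0)]
# The second minority point sits deep in majority territory: hardness 1.0.
print(hardness(minority, majority, k=3))
```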
Remaining useful life prediction of DA40 aircraft carbon brake pads based on bidirectional long short-term memory network
XU Meng, WANG Yakun
Journal of Computer Applications    2021, 41 (5): 1527-1532.   DOI: 10.11772/j.issn.1001-9081.2020071125
Aircraft brake pads play a very important role in the process of aircraft braking. It is of great significance to accurately predict the Remaining Useful Life (RUL) of aircraft brake pads for reducing braking faults and saving human and material resources. Aiming at the non-stationary and nonlinear characteristics of the aircraft brake pads wear sequence, a model for predicting the RUL of the aircraft brake pads based on Bidirectional Long Short-Term Memory (BiLSTM) network was proposed, namely VMD-BiLSTM model. Firstly, the method of Variational Mode Decomposition (VMD) was used to decompose the original wear sequence into several sub-sequences with different frequencies and bandwidths to reduce the non-stationarity of the sequence. Then, the BiLSTM neural network prediction models were constructed for the decomposed subsequences. Finally, the prediction values of the sub-sequences were superimposed to obtain the final prediction result of brake pads wear value, so as to realize the life prediction of the brake pads. The simulation results show that the Root Mean Square Error (RMSE) and the Mean Absolute Percentage Error (MAPE) of VMD-BiLSTM model are 0.466 and 0.898% respectively, both of which are better than those of the comparison models, verifying the superiority of VMD-BiLSTM model.
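The decompose-predict-superimpose pipeline can be shown in skeleton form. VMD and BiLSTM are stood in for here by a moving-average trend split and a naive last-value predictor, purely to make the superposition step concrete; the real model would swap in VMD and per-subseries BiLSTM networks:

```python
# Skeleton of decompose -> predict each sub-series -> superimpose.
# The decomposition and predictor below are toy stand-ins, not VMD/BiLSTM.

def decompose(series, window=3):
    """Split a series into a smooth trend and a residual
    (trend + residual reconstructs the series)."""
    half = window // 2
    trend = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        trend.append(sum(series[lo:hi]) / (hi - lo))
    residual = [x - t for x, t in zip(series, trend)]
    return trend, residual

def predict_next(subseries):
    """Stand-in for a per-subseries BiLSTM: repeat the last value."""
    return subseries[-1]

wear = [0.10, 0.12, 0.15, 0.19, 0.24, 0.30]
subseries = decompose(wear)
forecast = sum(predict_next(s) for s in subseries)  # superimpose sub-forecasts
print(round(forecast, 4))
```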
Data augmentation method based on improved deep convolutional generative adversarial networks
GAN Lan, SHEN Hongfei, WANG Yao, ZHANG Yuejin
Journal of Computer Applications    2021, 41 (5): 1305-1313.   DOI: 10.11772/j.issn.1001-9081.2020071059
In order to solve the training difficulty of small-sample data in deep learning and increase the training efficiency of the Deep Convolutional Generative Adversarial Network (DCGAN), an improved DCGAN algorithm was proposed to perform augmentation of small-sample data. In the method, the Wasserstein distance was first used to replace the loss model of the original network. Then, spectral normalization was added to both the generator and discriminator networks to obtain a stable network structure. Finally, the optimal noise input dimension of the samples was obtained by maximum likelihood estimation and experiment, so that the generated samples became more diversified. Experimental results on three datasets, MNIST, CelebA and Cartoon, indicate that the improved DCGAN can generate samples with higher definition and recognition rate than the network before improvement. In particular, the average recognition rates on these datasets were improved by 8.1%, 16.4% and 16.7% respectively, and several definition evaluation indices on the datasets were increased to different degrees, suggesting that the method can realize small-sample data augmentation effectively.
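Spectral normalization, the stabilizing step mentioned above, divides a weight matrix by its largest singular value, typically estimated by power iteration. A pure-Python toy on a 2x2 matrix (real implementations operate on layer weights inside the network):

```python
# Hedged sketch of spectral normalization via power iteration.
import math

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def transpose(W):
    return [list(col) for col in zip(*W)]

def spectral_norm(W, iters=50):
    """Estimate the largest singular value of W by power iteration."""
    v = [1.0] * len(W[0])
    for _ in range(iters):
        u = matvec(W, v)
        v = matvec(transpose(W), u)       # one step of W^T W power iteration
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
    u = matvec(W, v)
    return math.sqrt(sum(x * x for x in u))

W = [[3.0, 0.0], [0.0, 1.0]]              # singular values 3 and 1
sigma = spectral_norm(W)
W_sn = [[w / sigma for w in row] for row in W]
print(round(sigma, 4), round(spectral_norm(W_sn), 4))
```

After normalization the spectral norm is 1, which bounds the Lipschitz constant of the layer.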
Image segmentation model without initial contour
LUO Qin, WANG Yan
Journal of Computer Applications    2021, 41 (4): 1179-1183.   DOI: 10.11772/j.issn.1001-9081.2020071058
In order to enhance the robustness to the initial contour and improve the segmentation efficiency for images with intensity inhomogeneity or noise, a region-based active contour model was proposed. First, a global intensity fitting force and a local intensity fitting force were designed separately. Then, the model's fitting term was obtained by their linear combination, and the weight between the two fitting forces was adjusted to improve the robustness of the model to the initial contour. Finally, a length term of the evolution curve was employed to keep the curve smooth. Experimental results show that, compared with the Region-Scalable Fitting (RSF) model and the Selective Local or Global Segmentation (SLGS) model, the proposed model reduces the number of iterations by about 57% and 31%, and the segmentation time by about 62% and 14%, respectively. The proposed model can quickly and accurately segment noisy images and images with intensity inhomogeneity without an initial contour. Besides, it achieves good segmentation on practical images such as medical and infrared images.
Analysis of double-channel Chinese sentiment model integrating grammar rules
QIU Ningjia, WANG Xiaoxia, WANG Peng, WANG Yanchun
Journal of Computer Applications    2021, 41 (2): 318-323.   DOI: 10.11772/j.issn.1001-9081.2020050723
Concerning the problem that ignoring grammar rules reduces classification accuracy in Chinese text sentiment analysis, a double-channel Chinese sentiment classification model integrating grammar rules, namely CB_Rule (grammar Rules of CNN and Bi-LSTM), was proposed. First, grammar rules were designed to extract information with clearer sentiment tendencies, and semantic features were extracted by using the local perception feature of the Convolutional Neural Network (CNN). Then, considering that rule processing may ignore context, a Bi-directional Long Short-Term Memory (Bi-LSTM) network was used to extract global features containing contextual information, which were fused with the local features as a supplement, improving the sentiment tendency information of the CNN model. Finally, the improved features were input into the classifier for sentiment tendency judgment, and the Chinese sentiment model was constructed. The proposed model was compared with the R-Bi-LSTM (Bi-LSTM for Chinese sentiment analysis combined with grammar Rules) and SCNN (a travel review sentiment analysis model that combines Syntactic rules and CNN) models on a Chinese e-commerce review text dataset. Experimental results show that the accuracy of the proposed model is increased by 3.7 percentage points and 0.6 percentage points respectively, indicating that the proposed CB_Rule model has a good classification effect.
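A grammar rule of the kind described can be illustrated with the simplest example: a negation word in front of a sentiment word flips its polarity. The lexicon and rule below are invented stand-ins for the paper's rule set:

```python
# Toy sentiment grammar rule: negation flips the polarity of the
# following sentiment word. Lexicon and rule are illustrative only.

LEXICON = {"good": 1, "bad": -1, "satisfied": 1}
NEGATIONS = {"not", "never"}

def rule_score(tokens):
    """Sum sentiment over tokens, flipping polarity after a negation."""
    score = 0
    for i, tok in enumerate(tokens):
        if tok in LEXICON:
            polarity = LEXICON[tok]
            if i > 0 and tokens[i - 1] in NEGATIONS:
                polarity = -polarity
            score += polarity
    return score

print(rule_score("the service is not good".split()))
```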
Ordinal decision tree algorithm based on fuzzy advantage complementary mutual information
WANG Yahui, QIAN Yuhua, LIU Guoqing
Journal of Computer Applications    2021, 41 (10): 2785-2792.   DOI: 10.11772/j.issn.1001-9081.2020122006
When the traditional decision tree algorithm is applied to ordinal classification tasks, there are two problems: the traditional algorithm does not introduce the order relation, so it cannot learn and exploit the order structure of the dataset; and in real life there is much fuzzy rather than exact knowledge, which the traditional algorithm cannot handle when attribute values are fuzzy. To solve these problems, an ordinal decision tree algorithm based on fuzzy advantage complementary mutual information was proposed. Firstly, the dominant set was used to represent the order relations in the data, and fuzzy sets were introduced into the computation of the dominant set to form a fuzzy dominant set, which can not only reflect the order information in the data but also capture inexact knowledge automatically. Then, complementary mutual information was generalized on the basis of the fuzzy dominant set, yielding the fuzzy advantage complementary mutual information. Finally, with the fuzzy advantage complementary mutual information as a heuristic, a decision tree algorithm based on it was designed. Experimental results on 5 synthetic datasets and 9 real datasets show that the proposed algorithm makes fewer classification errors than classical decision tree algorithms on ordinal classification tasks.
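The move from a crisp dominant set to a fuzzy one can be illustrated on a single ordinal attribute. The sigmoid-style membership below is an assumption standing in for the paper's fuzzification; names are illustrative:

```python
# Crisp vs. fuzzy dominance on one ordinal attribute (illustrative).
import math

def crisp_dominates(a, b):
    """1 if a is at least as good as b, else 0."""
    return 1.0 if a >= b else 0.0

def fuzzy_dominates(a, b, steepness=4.0):
    """Graded degree to which a dominates b; near 0.5 when a is close to b."""
    return 1.0 / (1.0 + math.exp(-steepness * (a - b)))

values = [1.0, 2.0, 3.0]
# Fuzzy dominant "set" of the middle sample: a membership per sample,
# instead of the crisp 0/1 of the classical dominant set.
print([round(fuzzy_dominates(v, 2.0), 3) for v in values])
```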
Fast algorithm for distance regularized level set evolution model
YUAN Quan, WANG Yan, LI Yuxian
Journal of Computer Applications    2020, 40 (9): 2743-2747.   DOI: 10.11772/j.issn.1001-9081.2020010106
The gradient descent method has poor convergence and is sensitive to local minima. Therefore, an improved NAG (Nesterov's Accelerated Gradient) algorithm was proposed to replace the gradient descent algorithm in the Distance Regularized Level Set Evolution (DRLSE) model, yielding a fast image segmentation algorithm based on NAG. First, the initial level set evolution equation was given. Second, the gradient was calculated by using the NAG algorithm. Finally, the level set function was updated continuously, preventing it from falling into a local minimum. Experimental results show that, compared with the original algorithm in the DRLSE model, the proposed algorithm reduces the number of iterations by about 30% and the CPU running time by more than 30%. The algorithm is simple to implement, and can be applied to segment images with high real-time requirements such as infrared and medical images.
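The NAG update itself is small: the gradient is evaluated at a look-ahead point along the current velocity. A minimal 1-D sketch on a toy quadratic (not the level-set evolution itself; learning rate and momentum are invented values):

```python
# Minimal Nesterov accelerated gradient (NAG) update, as used in place
# of plain gradient descent. Toy 1-D quadratic, illustrative parameters.

def nag_minimize(grad, x0, lr=0.1, momentum=0.9, steps=100):
    x, v = x0, 0.0
    for _ in range(steps):
        lookahead = x + momentum * v          # peek ahead along the velocity
        v = momentum * v - lr * grad(lookahead)
        x += v
    return x

# Minimise f(x) = (x - 3)^2, whose gradient is 2*(x - 3); minimum at x = 3.
x_star = nag_minimize(lambda x: 2.0 * (x - 3.0), x0=0.0)
print(round(x_star, 4))
```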
Log analysis and workload characteristic extraction in distributed storage system
GOU Zi'an, ZHANG Xiao, WU Dongnan, WANG Yanqiu
Journal of Computer Applications    2020, 40 (9): 2586-2593.   DOI: 10.11772/j.issn.1001-9081.2020010121
Analyzing the workload running on a file system is helpful for optimizing the performance of distributed file systems and is crucial to the construction of new storage systems. Due to the complexity of workloads and their increasing diversity of scale, capturing the characteristics of workload traces explicitly by intuition-based analysis is incomplete. To solve this problem, a distributed log analysis and workload characteristic extraction model was proposed. First, read- and write-related information was extracted from distributed file system logs according to keywords. Second, the workload characteristics were described from two aspects: statistics and timing. Finally, the possibility of system optimization based on workload characteristics was analyzed. Experimental results show that the proposed model is feasible and accurate, and can give detailed workload statistics and timing characteristics. It has the advantages of low overhead, high timeliness and ease of analysis, and can be used to guide the synthesis of workloads with the same characteristics, hot-spot data monitoring, and cache prefetching optimization of the system.
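The keyword-driven extraction step can be sketched with the standard library: pull read/write records out of raw log lines, then summarise counts and byte totals. The log format shown is invented for illustration; real distributed file system logs differ:

```python
# Hedged sketch of keyword-based log extraction and workload statistics.
import re

LOG = """\
2020-01-09 12:00:01 op=read  size=4096  path=/data/a
2020-01-09 12:00:02 op=write size=8192  path=/data/b
2020-01-09 12:00:03 op=read  size=4096  path=/data/a
2020-01-09 12:00:04 op=stat  path=/data/a
"""

PATTERN = re.compile(r"op=(read|write)\s+size=(\d+)")

def workload_stats(log_text):
    """Count read/write operations and total bytes per operation type."""
    stats = {}
    for match in PATTERN.finditer(log_text):
        op, size = match.group(1), int(match.group(2))
        count, total = stats.get(op, (0, 0))
        stats[op] = (count + 1, total + size)
    return stats

print(workload_stats(LOG))
```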
Wireless sensor network intrusion detection system based on sequence model
CHENG Xiaohui, NIU Tong, WANG Yanjun
Journal of Computer Applications    2020, 40 (6): 1680-1684.   DOI: 10.11772/j.issn.1001-9081.2019111948
With the rapid development of the Internet of Things (IoT), more and more IoT node devices are deployed, but the accompanying security problems cannot be ignored. Node devices at the network layer of the IoT mainly communicate through wireless sensor networks; compared with the Internet, they are more open and more vulnerable to network attacks such as denial of service. Aiming at the network-layer security problem faced by wireless sensor networks, a network intrusion detection system based on a sequence model was proposed to detect and raise alarms on network-layer intrusions, achieving a higher recognition rate and a lower false positive rate. In addition, aiming at the security problem of node host devices in wireless sensor networks, and taking node overhead into consideration, a host intrusion detection system based on a simple sequence model was proposed. The experimental results show that the two intrusion detection systems, for the network layer and the host layer of wireless sensor networks, both achieve an accuracy of more than 99% and a false detection rate of about 1%, which meets industrial requirements. The two proposed systems can comprehensively and effectively protect wireless sensor network security.
Improved fuzzy c-means MRI segmentation based on neighborhood information
WANG Yan, HE Hongke
Journal of Computer Applications    2020, 40 (4): 1196-1201.   DOI: 10.11772/j.issn.1001-9081.2019091539
In the segmentation of brain images, image quality is often reduced by the influence of noise or outliers, and traditional fuzzy clustering has some limitations and is easily affected by the initial values, which makes it difficult for doctors to accurately identify and extract brain tissue. Aiming at these problems, an improved fuzzy clustering image segmentation method was proposed in which the neighborhoods of image pixels are modeled by a Markov model. Firstly, the initial clustering centers were determined by a Genetic Algorithm (GA). Secondly, the expression of the objective function was changed: a correction term was added to the objective function, changing the calculation of the membership matrix, which was adjusted by a constraint coefficient. Finally, a Markov Random Field (MRF) was used to represent the label information of the neighborhood pixels, and the maximized conditional probability of the Markov random field was used to represent the neighborhood of each pixel, which improves the noise immunity. Experimental results show that the proposed method has good noise immunity; it can reduce the false segmentation rate and achieves high segmentation accuracy when used to segment brain images. The average accuracy of the segmented images reaches a Jaccard Similarity (JS) index of 82.76%, a Dice index of 90.45%, and a Sensitivity index of 90.19%. At the same time, the segmented brain image boundaries are clearer and the segmented images are closer to the standard segmentation.
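The baseline that this method improves on is the standard fuzzy c-means membership update, u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)). A pure-Python sketch on 1-D toy data (the paper's correction term and MRF neighbourhood prior are omitted here):

```python
# Standard fuzzy c-means membership update, 1-D toy data (illustrative).

def fcm_memberships(points, centers, m=2.0):
    """Return, per point, its membership degree in each cluster."""
    u = []
    for x in points:
        dists = [abs(x - c) for c in centers]
        if any(d == 0.0 for d in dists):            # point sits on a centre
            u.append([1.0 if d == 0.0 else 0.0 for d in dists])
            continue
        row = []
        for dk in dists:
            row.append(1.0 / sum((dk / dj) ** (2.0 / (m - 1.0)) for dj in dists))
        u.append(row)
    return u

memberships = fcm_memberships([0.0, 2.5, 10.0], centers=[0.0, 10.0])
print([[round(v, 3) for v in row] for row in memberships])
```

Each row sums to 1: a pixel belongs partially to every cluster, which is exactly what the MRF neighbourhood term then regularizes in the improved model.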
Speech enhancement algorithm based on MMSE spectral subtraction with Laplacian distribution
WANG Yongbiao, ZHANG Wenxi, WANG Yahui, KONG Xinxin, LYU Tong
Journal of Computer Applications    2020, 40 (3): 878-882.   DOI: 10.11772/j.issn.1001-9081.2019071152
A Minimum Mean Square Error (MMSE) spectral subtraction speech enhancement algorithm based on the Laplacian distribution was proposed to solve the noise residual and speech distortion left by spectral subtraction algorithms based on the Gaussian distribution. Firstly, the original noisy speech signal was framed and windowed, and the Fourier transform was performed on each processed frame to obtain the Discrete Fourier Transform (DFT) coefficients of short-term speech. Secondly, noisy-frame detection was performed to update the noise estimate by calculating the logarithmic spectral energy and spectral flatness of each frame. Thirdly, under the assumption that the speech DFT coefficients follow a Laplacian distribution, the optimal spectral subtraction coefficient was derived under the MMSE criterion, and spectral subtraction with the obtained coefficient was performed to obtain the enhanced signal spectrum. Finally, the enhanced signal spectrum was subjected to the inverse Fourier transform and frame synthesis to obtain the enhanced speech. The experimental results show that the Signal-to-Noise Ratio (SNR) of the speech enhanced by the proposed algorithm is increased by 4.3 dB on average, a 2 dB improvement over the over-subtraction method. In terms of the Perceptual Evaluation of Speech Quality (PESQ) score, the average score of the proposed algorithm shows a 10% improvement over the over-subtraction method. The proposed algorithm achieves better noise suppression and less speech distortion, with significant improvements on the SNR and PESQ evaluation standards.
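The core spectral-subtraction step, before any MMSE-optimal coefficient is derived, is a per-bin magnitude subtraction with a floor to avoid negative spectra (the source of "musical noise"). The alpha and beta values below are illustrative, not the paper's derived coefficients:

```python
# Basic magnitude spectral subtraction with a spectral floor (sketch).

def spectral_subtract(noisy_mag, noise_mag, alpha=1.0, beta=0.02):
    """Per-bin enhanced magnitude: max(|Y| - alpha*|N|, beta*|Y|)."""
    return [max(y - alpha * n, beta * y) for y, n in zip(noisy_mag, noise_mag)]

noisy = [1.00, 0.30, 0.05]   # |Y(k)| for three frequency bins
noise = [0.20, 0.25, 0.20]   # estimated |N(k)| from noise-only frames
print(spectral_subtract(noisy, noise))
```

In the third bin the noise estimate exceeds the noisy magnitude, so the floor beta*|Y| takes over instead of a negative value.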
Wireless sensor deployment optimization based on improved IHACA-CpSPIEL algorithm
DUAN Yujun, WANG Yaoli, CHANG Qing, LIU Xing
Journal of Computer Applications    2020, 40 (3): 793-798.   DOI: 10.11772/j.issn.1001-9081.2019071201
Aiming at the problems of low coverage and high communication cost in wireless sensor deployment, a deployment method combining an Improved Heuristic Ant Colony Algorithm (IHACA) with a chaos-optimized pSPIEL (padded Sensor Placements at Informative and cost-Effective Locations) algorithm, denoted IHACA-CpSPIEL, was proposed. Firstly, the correlation between observation points and unobserved points was established by mutual information, and the communication cost was described in graph-theoretic form to establish a mathematical model with submodularity. Secondly, a chaos operator was introduced to improve the global searching ability of the pSPIEL algorithm for local parameters, and then the optimal number of clusters was found. Then, the factors of the colony distance heuristic function and the pheromone updating mechanism were changed to jump out of locally optimal communication costs. Finally, the chaos-optimized pSPIEL algorithm (CpSPIEL) was integrated with the IHACA to determine the shortest path, so as to achieve low-cost deployment. The experimental results show that the proposed algorithm can escape local optima well, reduces the communication cost by 6.5% to 24.0% compared with the pSPIEL algorithm, and has a faster search speed.
Human activity recognition based on improved particle swarm optimization-support vector machine and context-awareness
WANG Yang, ZHAO Hongdong
Journal of Computer Applications    2020, 40 (3): 665-671.   DOI: 10.11772/j.issn.1001-9081.2019091551
Concerning the problem of low accuracy of human activity recognition, a recognition method combining Support Vector Machine (SVM) with context-awareness (actual logic or statistical model of human motion state transition) was proposed to identify six types of human activities (walking, going upstairs, going downstairs, sitting, standing, lying). Logical relationships existing between human activity samples were used by the method. Firstly, the SVM model was optimized by using the Improved Particle Swarm Optimization (IPSO) algorithm. Then, the optimized SVM was used to classify the human activities. Finally, the context-awareness was used to correct the error recognition results. Experimental results show that the classification accuracy of the proposed method reaches 94.2% on the Human Activity Recognition Using Smartphones (HARUS) dataset of University of California, Irvine (UCI), which is higher than that of traditional classification method based on pattern recognition.
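The context-awareness correction step can be sketched as a transition-feasibility check: when consecutive predicted activities form an impossible transition (e.g. lying directly to walking), the later prediction is replaced by the previous state. The allowed-transition table below is an invented simplification, not the paper's actual state-transition model:

```python
# Sketch of context-aware correction of a classifier's activity sequence.

ALLOWED = {
    "walking":    {"walking", "upstairs", "downstairs", "standing"},
    "standing":   {"standing", "walking", "sitting"},
    "sitting":    {"sitting", "standing", "lying"},
    "lying":      {"lying", "sitting"},
    "upstairs":   {"upstairs", "walking"},
    "downstairs": {"downstairs", "walking"},
}

def correct(predictions):
    """Replace predictions that violate the transition table."""
    fixed = [predictions[0]]
    for nxt in predictions[1:]:
        fixed.append(nxt if nxt in ALLOWED[fixed[-1]] else fixed[-1])
    return fixed

raw = ["lying", "lying", "walking", "sitting", "standing"]
# "lying -> walking" is infeasible, so that prediction is overridden.
print(correct(raw))
```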
General chess piece positioning method under uneven illumination
WANG Yajie, ZHANG Yunbo, WU Yanyan, DING Aodong, QI Bingzhi
Journal of Computer Applications    2020, 40 (12): 3490-3498.   DOI: 10.11772/j.issn.1001-9081.2020060892
Focusing on the problem of chess piece positioning errors in chess robot systems under uneven illumination, a general chess piece positioning method based on block convex hull detection and image masking was proposed. Firstly, the set of points on the outline of the chessboard was extracted, and the coordinates of the four vertices of the chessboard were detected using the block convex hull method. Secondly, the coordinates of the four vertices of the standard chessboard image were defined, and the transformation matrix was calculated by the perspective transformation principle. Thirdly, the type of the chessboard was recognized based on the differences between the small square areas of different chessboards. Finally, the captured chessboard images were successively corrected to standard chessboard images, the difference images of two adjacent standard chessboard images were obtained, and dilation, image-mask multiplication and erosion operations were performed on the difference images to obtain the effective areas of the chess pieces and calculate their center coordinates. Experimental results demonstrate that the proposed method achieves average positioning accuracies of 95.5% for Go pieces and 99.06% for Chinese chess pieces under four kinds of uneven illumination, significant improvements over other chess piece positioning algorithms. At the same time, the proposed method can solve the inaccurate local positioning of chess pieces caused by adhesion between chess pieces, chess piece projection and lens distortion.
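The convex-hull step used to recover the board's outer corners can be illustrated with Andrew's monotone-chain algorithm over extracted contour points (the block-wise refinement and the perspective transform are not reproduced here; the point set is invented):

```python
# Monotone-chain convex hull: interior contour noise falls away and the
# four extreme board corners survive (illustrative stand-in for the
# block convex hull detection step).

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Noisy contour points of a tilted board: only the corners remain.
contour = [(0, 0), (10, 1), (11, 11), (1, 10), (5, 5), (6, 4), (3, 7)]
print(convex_hull(contour))
```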
Semi-supervised learning method for automatic nuclei segmentation using generative adversarial network
CHENG Kai, WANG Yan, LIU Jianfei
Journal of Computer Applications    2020, 40 (10): 2917-2922.   DOI: 10.11772/j.issn.1001-9081.2020020136
In order to reduce the dependence on the number of labeled images, a novel semi-supervised learning method was proposed for automatic segmentation of nuclei. Firstly, a novel Convolutional Neural Network (CNN) was used to extract the cell regions from the background. Then, a confidence map for the input image was generated by the discriminator network via a fully convolutional network. At the same time, the adversarial loss and the standard cross-entropy loss were coupled to improve the performance of the segmentation network. Finally, the labeled images and unlabeled images were combined with the confidence maps to train the segmentation network, so that the segmentation network could identify the nuclei in the extracted cell regions. Experimental results on 84 images (1/8 of the images in the training set labeled, the rest unlabeled) showed that the SEGmentation accuracy measurement (SEG) score of the proposed nuclei segmentation method achieved 77.9% and its F1 score was 76.0%, better than those of the same method using 670 images (all images in the training set labeled).
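The coupling of the two training signals can be sketched as a weighted sum: total loss = cross-entropy on labeled pixels + lambda times the adversarial loss. The probabilities and lambda below are invented; in the real model both terms come from network outputs:

```python
# Sketch of coupling cross-entropy and adversarial losses (illustrative).
import math

def cross_entropy(p_true):
    """-log p of the true class, averaged over pixels."""
    return sum(-math.log(p) for p in p_true) / len(p_true)

def coupled_loss(p_true, adv_loss, lam=0.1):
    """Total segmentation loss: supervised term + weighted adversarial term."""
    return cross_entropy(p_true) + lam * adv_loss

print(round(coupled_loss([0.9, 0.8, 0.95], adv_loss=0.4), 4))
```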
Adaptive intensity fitting model for segmentation of images with intensity inhomogeneity
ZHANG Xuyuan, WANG Yan
Journal of Computer Applications    2019, 39 (9): 2719-2725.   DOI: 10.11772/j.issn.1001-9081.2019020364
For the segmentation of images with intensity inhomogeneity, a region-adaptive intensity fitting model combining global information was proposed. Firstly, the local and global terms were constructed based on local and global image information respectively. Secondly, an adaptive weight function was defined to indicate the deviation degree of the gray scale in a pixel's neighborhood by utilizing the extreme difference (range) level of that neighborhood. Finally, the defined weight function was used to assign weights to the local and global terms adaptively, giving the energy functional of the proposed model, and the iterative equation of the model's level set function was deduced by the variational method. The experimental results show that, in comparison with the Region-Scalable Fitting (RSF) model and the Local and Global Intensity Fitting (LGIF) model, the proposed model can segment various inhomogeneous images stably and accurately, and is more robust to the position, size and shape of the initial contour of the evolution curve.
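The adaptive weighting idea can be sketched numerically: the local grey-level range (extreme difference) in a pixel's neighbourhood drives the weight that mixes the local and global terms. The formula and names below are one plausible reading of the description, not the paper's exact function:

```python
# Illustrative adaptive weight from neighbourhood grey-level range.

def adaptive_weight(neighbourhood, max_range=255.0):
    """Large local range (inhomogeneity) -> lean on the local term."""
    spread = max(neighbourhood) - min(neighbourhood)
    return min(spread / max_range, 1.0)

def fitting_force(local_force, global_force, w):
    """Convex combination of the two fitting forces."""
    return w * local_force + (1.0 - w) * global_force

flat_patch = [100, 101, 99, 100]      # homogeneous: trust the global term
biased_patch = [40, 200, 90, 160]     # inhomogeneous: trust the local term
print(round(adaptive_weight(flat_patch), 4), round(adaptive_weight(biased_patch), 4))
```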
Biogeography-based optimization algorithms based on improved migration rate models
WANG Yaping, ZHANG Zhengjun, YAN Zihan, JIN Yazhou
Journal of Computer Applications    2019, 39 (9): 2511-2516.   DOI: 10.11772/j.issn.1001-9081.2019020325
The Biogeography-Based Optimization (BBO) algorithm updates habitats continuously through migration and mutation to find the optimal solution, and the migration model significantly affects the performance of the algorithm. In view of the insufficient adaptability of the linear migration model used in the original BBO algorithm, three nonlinear migration models were proposed, based on the Logistic function, a cubic polynomial function and the hyperbolic tangent function respectively. Optimization experiments were carried out on 17 typical benchmark functions, and the results show that the migration model based on the hyperbolic tangent function performs better than both the linear migration model of the original BBO algorithm and the cosine migration model, itself a well-performing improved model. The stability test shows that the migration model based on the hyperbolic tangent function also outperforms the original linear migration model under different mutation rates on most test functions. The model preserves the diversity of solutions and adapts better to nonlinear migration problems, with improved search ability.
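The two migration models can be compared directly as functions of the normalised species count s in [0, 1]: immigration rate lambda falls as a habitat fills, emigration rate mu rises. The tanh scaling below is one plausible construction matching the linear model at s = 0, 0.5 and 1, assuming I_max = E_max = 1 as in the standard BBO formulation; it is not the paper's exact model:

```python
# Linear vs. tanh-shaped migration rate models (illustrative scaling).
import math

def linear_rates(s):
    """Original BBO: lambda = 1 - s, mu = s."""
    return 1.0 - s, s

def tanh_rates(s):
    """Nonlinear model: steep change near s = 0.5, flat near the ends."""
    lam = 0.5 * (1.0 - math.tanh(6.0 * (s - 0.5)) / math.tanh(3.0))
    return lam, 1.0 - lam

for s in (0.0, 0.5, 1.0):
    print(s, [round(v, 3) for v in linear_rates(s)],
          [round(v, 3) for v in tanh_rates(s)])
```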
Real-time face recognition on ARM platform based on deep learning
FANG Guokang, LI Jun, WANG Yaoru
Journal of Computer Applications    2019, 39 (8): 2217-2222.   DOI: 10.11772/j.issn.1001-9081.2019010164
Aiming at the problem of low real-time performance of face recognition and low face recognition rate on ARM platform, a real-time face recognition method based on deep learning was proposed. Firstly, an algorithm for detecting and tracking faces in real time was designed based on MTCNN face detection algorithm. Then, a face feature extraction network was designed based on Residual Neural Network (ResNet) on ARM platform. Finally, according to the characteristics of ARM platform, Mali-GPU was used to accelerate the operation of face feature extraction network, sharing the CPU load and improving the overall running efficiency of the system. The algorithm was deployed on ARM-based Rockchip development board, and the running speed reaches 22 frames per second. Experimental results show that the recognition rate of this method is 11 percentage points higher than that of MobileFaceNet on MegaFace.
Clone code detection based on image similarity
WANG Yafang, LIU Dongsheng, HOU Min
Journal of Computer Applications    2019, 39 (7): 2074-2080.   DOI: 10.11772/j.issn.1001-9081.2019010083
At present, scholars in the field of clone code detection mainly focus on four perspectives: text, vocabulary, grammar and semantics. However, few breakthroughs have been made in the effectiveness of clone code detection for a long time. In view of this problem, a new method called Clone Code detection based on Image Similarity (CCIS) was proposed. Firstly, the source code was preprocessed by removing comments, white space, etc., yielding a "clean" function fragment, and the identifiers, keywords, etc. in the function were highlighted. Then the processed source code was converted into images, and these images were normalized. Finally, the Jaccard distance and a perceptual hash algorithm were used for detection, obtaining clone code information from these images. In order to verify the validity of this method, six open source software projects were used to constitute the evaluation dataset. The experimental results show that the CCIS method can detect 100% of type-1 clone code, 88% of type-2 clone code and 60% of type-3 clone code, which proves its good effect on clone code detection.
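The perceptual-hash comparison at the heart of the method can be illustrated with a toy "average hash": one bit per cell of a small grayscale grid, compared by the fraction of matching bits. Real CCIS first renders code to images; the 4x4 grids below are invented stand-ins for two renderings of near-identical code:

```python
# Toy average hash + similarity, standing in for the perceptual hash step.

def average_hash(grid):
    """One bit per cell: 1 if the cell is brighter than the grid mean."""
    cells = [v for row in grid for v in row]
    mean = sum(cells) / len(cells)
    return tuple(1 if v > mean else 0 for v in cells)

def hash_similarity(h1, h2):
    """Fraction of matching bits (1 - normalised Hamming distance)."""
    same = sum(a == b for a, b in zip(h1, h2))
    return same / len(h1)

img_a = [[200, 200, 10, 10],
         [200, 200, 10, 10],
         [10, 10, 10, 10],
         [10, 10, 10, 10]]
img_b = [[190, 210, 15, 5],      # same layout, slightly different pixels
         [205, 195, 12, 8],
         [12, 8, 10, 10],
         [10, 10, 12, 8]]
print(hash_similarity(average_hash(img_a), average_hash(img_b)))
```

Small pixel-level differences (renamed identifiers, reformatting) leave the hash unchanged, which is why clones survive the comparison.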
Segmentation of nasopharyngeal neoplasms based on random forest feature selection algorithm
LI Xian, WANG Yan, LUO Yong, ZHOU Jiliu
Journal of Computer Applications    2019, 39 (5): 1485-1489.   DOI: 10.11772/j.issn.1001-9081.2018102205
Abstract391)      PDF (796KB)(345)       Save
Due to the low grey-level contrast and blurred organ boundaries in medical images, a Random Forest (RF) feature selection algorithm was proposed to segment MR images of nasopharyngeal neoplasms. Firstly, gray-level, texture and geometry information was extracted from the nasopharyngeal neoplasm images to construct a random forest classifier. Then, feature importances were measured by the random forest, and the proposed feature selection method was applied to the original handcrafted feature set. Finally, the optimal feature subset obtained from the feature selection process was used to construct a new random forest classifier to perform the final segmentation. Experimental results show that the performance of the proposed algorithm is: Dice coefficient 79.197%, accuracy 97.702%, sensitivity 72.191%, and specificity 99.502%. By comparing with the conventional random forest based and Deep Convolutional Neural Network (DCNN) based segmentation algorithms, it is clear that the proposed feature selection algorithm can effectively extract useful information from nasopharyngeal neoplasm MR images and improve the segmentation accuracy under small-sample circumstances.
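The selection step can be illustrated as ranking features by the importance scores a trained random forest reports and keeping the top fraction. The feature names, scores and keep ratio below are hypothetical, and the paper's exact selection rule may differ:

```python
def select_features(importances, keep_ratio=0.5):
    """Rank features by importance and keep the top fraction."""
    ranked = sorted(importances.items(), key=lambda kv: kv[1], reverse=True)
    k = max(1, int(len(ranked) * keep_ratio))
    return [name for name, _ in ranked[:k]]

# Hypothetical importances for handcrafted gray-level/texture/geometry features.
scores = {"gray_mean": 0.31, "texture_lbp": 0.27, "area": 0.05, "edge_density": 0.22}
print(select_features(scores, keep_ratio=0.5))  # ['gray_mean', 'texture_lbp']
```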
Reference | Related Articles | Metrics
Downlink resource scheduling based on weighted average delay in long term evolution system
WANG Yan, MA Xiurong, SHAN Yunlong
Journal of Computer Applications    2019, 39 (5): 1429-1433.   DOI: 10.11772/j.issn.1001-9081.2018081734
Abstract471)      PDF (738KB)(243)       Save
Aiming at the transmission performance requirements of Real-Time (RT) and Non-Real-Time (NRT) services for multiple users in the downlink of the Long Term Evolution (LTE) mobile communication system, an improved Modified Largest Weighted Delay First (MLWDF) scheduling algorithm based on weighted average delay was proposed. On the basis of considering both channel perception and Quality of Service (QoS) perception, a weighted average delay factor reflecting the state of the user buffer was utilized, obtained by balancing the average delay of the data to be transmitted against that of the already transmitted data in the user buffer. RT services with large delay and traffic are prioritized, which improves the user performance experience. Theoretical analysis and link simulation show that the proposed algorithm improves the QoS performance of RT services while ensuring the delay and fairness of each service. Compared with the MLWDF algorithm, when the number of users reached 50, the packet loss rate of RT services decreased by 53.2% and the average throughput of RT traffic increased by 44.7%. Although the throughput of NRT services is sacrificed, it is still better than that of the VT-MLWDF (Virtual Token MLWDF) algorithm. The theoretical analysis and simulation results show that the transmission performance and QoS of the proposed algorithm are superior to those of the comparison algorithm.
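For reference, the classic MLWDF metric that the proposed algorithm modifies prioritizes user i by a_i · D_i · r_i / R_i, where D_i is the head-of-line delay, r_i the instantaneous rate, R_i the average rate, and a_i = -log(δ_i)/τ_i for delay budget τ_i and target drop probability δ_i. The sketch below implements that metric; the abstract only outlines the weighted average delay factor, so the extra `delay_weight` argument is an assumption about where it enters:

```python
import math

def mlwdf_metric(hol_delay, inst_rate, avg_rate, delay_budget, drop_prob,
                 delay_weight=1.0):
    """Classic MLWDF priority, optionally scaled by a buffer-state delay
    weight as in the improved algorithm (form assumed for illustration)."""
    a = -math.log(drop_prob) / delay_budget
    return delay_weight * a * hol_delay * inst_rate / avg_rate

# A user with a larger head-of-line delay gets a higher priority, all else equal.
urgent = mlwdf_metric(hol_delay=0.08, inst_rate=5e6, avg_rate=2e6,
                      delay_budget=0.1, drop_prob=0.01)
relaxed = mlwdf_metric(hol_delay=0.02, inst_rate=5e6, avg_rate=2e6,
                       delay_budget=0.1, drop_prob=0.01)
print(urgent > relaxed)  # True
```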
Reference | Related Articles | Metrics
Automatic segmentation of nasopharyngeal neoplasm in MR image based on U-net model
PAN Peike, WANG Yan, LUO Yong, ZHOU Jiliu
Journal of Computer Applications    2019, 39 (4): 1183-1188.   DOI: 10.11772/j.issn.1001-9081.2018091908
Abstract432)      PDF (970KB)(323)       Save
Because of the uncertain growth direction and complex anatomical structure of nasopharyngeal tumors, doctors usually delineate the tumor regions in MR images manually, which is time-consuming, and the delineation result heavily depends on the doctor's experience. To solve this problem, a U-net based automatic segmentation algorithm for nasopharyngeal tumors in MR images was proposed based on deep learning, in which the max-pooling operations in the original U-net model were replaced by convolution operations to keep more feature information. Firstly, regions of size 128×128 were extracted from all slices containing tumor regions as data samples. Secondly, the patient samples were divided into a training sample set and a testing sample set, and data augmentation was performed on the training samples. Finally, all the training samples were used to train the model. To evaluate the performance of the proposed U-net based model, all slices of patients in the testing sample set were segmented, and the final average results are: Dice Similarity Coefficient (DSC) 80.05%, Percent Match (PM) coefficient 85.7%, Correspondence Ratio (CR) coefficient 71.26% and Average Symmetric Surface Distance (ASSD) 1.1568. Compared with the Convolutional Neural Network (CNN) based model, the DSC, PM and CR coefficients of the proposed method are increased by 9.86, 19.61 and 16.02 percentage points respectively, and the ASSD is decreased by 0.4364. Compared with the Fully Convolutional Network (FCN) model and the max-pooling based U-net model, the DSC and CR coefficients of the proposed method achieve the best results, while the PM coefficient is 2.55 percentage points lower than the maximum value of the two comparison models, and the ASSD is slightly higher than their minimum value by 0.0046. The experimental results show that the proposed model can achieve good segmentation of nasopharyngeal neoplasms and assist doctors in diagnosis.
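Among the reported metrics, the Dice Similarity Coefficient has a simple closed form, DSC = 2|A∩B| / (|A|+|B|) for predicted mask A and ground-truth mask B; a minimal sketch over flattened binary masks:

```python
def dice_coefficient(pred, truth):
    """DSC = 2*|intersection| / (|pred| + |truth|) for binary masks."""
    inter = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

# One overlapping foreground pixel out of 2 + 1 -> DSC = 2/3.
print(dice_coefficient([1, 1, 0, 0], [1, 0, 0, 0]))
```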
Reference | Related Articles | Metrics
Object tracking algorithm based on correlation filter with spatial structure information
HU Xiuhua, WANG Changyuan, XIAO Feng, WANG Yawen
Journal of Computer Applications    2019, 39 (4): 1150-1156.   DOI: 10.11772/j.issn.1001-9081.2018091884
Abstract450)      PDF (1190KB)(297)       Save
To solve the tracking drift problem caused by the low discriminability of sample information in the typical correlation filtering framework, a correlation filter based object tracking algorithm with spatial structure information was proposed. Firstly, a spatial context structure constraint was introduced to optimize the model construction, and regularized least squares and the matrix decomposition idea were exploited to obtain the closed-form solution. Then, complementary features were used for the target appearance description, and a scale factor pool was utilized to deal with target scale changes. Finally, with occlusion of the target judged by motion continuity, a corresponding model updating strategy was designed. Experimental results demonstrate that, compared with the traditional algorithm, the precision of the proposed algorithm is increased by 17.63% and the success rate is improved by 24.93% in various typical test scenarios, achieving a more robust tracking effect.
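The closed-form step rests on regularized least squares; in the simplest scalar case the solution is w = Σxᵢyᵢ / (Σxᵢ² + λ), which the following toy sketch illustrates (the paper's filter solves a matrix version of this with additional spatial constraints):

```python
def ridge_scalar(xs, ys, lam):
    """Closed-form 1-D regularized least squares (no intercept):
    minimizes sum((y - w*x)^2) + lam*w^2."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

# With lam = 0 and y = 2x exactly, the solution recovers w = 2.
print(ridge_scalar([1, 2, 3], [2, 4, 6], lam=0.0))  # 2.0
```

A positive λ shrinks the solution toward zero, which is what stabilizes the filter against noisy samples.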
Reference | Related Articles | Metrics
Non-rigid multi-modal brain image registration by using improved Zernike moment based local descriptor and graph cuts discrete optimization
WANG Lifang, WANG Yanli, LIN Suzhen, QIN Pinle, GAO Yuan
Journal of Computer Applications    2019, 39 (2): 582-588.   DOI: 10.11772/j.issn.1001-9081.2018061423
Abstract360)      PDF (1232KB)(250)       Save
When noise and intensity distortion exist in brain images, methods based on structural information cannot accurately extract image intensity information, edge and texture features at the same time, and the computational complexity of continuous optimization is relatively high. To solve these problems, a non-rigid multi-modal brain image registration method based on an Improved Zernike Moment based Local Descriptor (IZMLD) and Graph Cuts (GC) discrete optimization was proposed. Firstly, the image registration problem was regarded as a discrete labeling problem of a Markov Random Field (MRF), and an energy function was constructed from two terms: the pixel similarity and the smoothness of the displacement vector field. Secondly, a smoothness constraint based on the first derivative of the deformation vector field was used to penalize displacement labels with sharp changes between adjacent pixels, and the similarity metric based on IZMLD was used as the data term to represent pixel similarity. Thirdly, the Zernike moments of image patches were used to calculate the self-similarity of the reference image and the floating image in a local neighborhood and construct an effective local descriptor, with the Sum of Absolute Differences (SAD) between descriptors taken as the similarity metric. Finally, the whole energy function was discretized and minimized by using an extended optimization algorithm of GC. The experimental results show that, compared with the registration methods based on the Sum of Squared Differences on Entropy images (ESSD), the Modality Independent Neighborhood Descriptor (MIND) and the Stochastic Second-Order Entropy Image (SSOEI), the mean target registration error of the proposed method is decreased by 18.78%, 10.26% and 8.89% respectively, and the registration time is shortened by about 20 s compared to the continuous optimization algorithm. The proposed method achieves efficient and accurate registration of images with noise and intensity distortion.
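The similarity metric between descriptors is the Sum of Absolute Differences, SAD(a, b) = Σ|aᵢ − bᵢ|; a minimal sketch (the descriptors here are toy vectors, not actual Zernike-moment descriptors):

```python
def sad(desc_a, desc_b):
    """Sum of Absolute Differences between two descriptors."""
    return sum(abs(a - b) for a, b in zip(desc_a, desc_b))

# Lower SAD means more similar local patches.
print(sad([0.2, 0.5, 0.9], [0.1, 0.6, 0.8]))  # ~0.3
print(sad([0.2, 0.5, 0.9], [0.2, 0.5, 0.9]))  # 0.0
```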
Reference | Related Articles | Metrics
Multi-authority access control scheme with policy hiding of satellite network
WANG Yaqiong, SHI Guozhen, XIE Rongna, LI Fenghua, WANG Yazhe
Journal of Computer Applications    2019, 39 (2): 470-475.   DOI: 10.11772/j.issn.1001-9081.2018081959
Abstract275)      PDF (1000KB)(275)       Save
The satellite network has unique characteristics that differ from traditional networks, such as channel openness, node exposure and limited onboard processing capability. However, existing Ciphertext-Policy Attribute-Based Encryption (CP-ABE) access control is not suitable for the satellite network due to policy explosion and its attribute-based authorization manner. To address this problem, a multi-authority access control scheme with policy hiding for the satellite network was proposed. A Linear Secret Sharing Scheme (LSSS) matrix access structure was adopted to guarantee data confidentiality, and the access control policy was completely hidden by obfuscating the access structure. In addition, multiple authorities were used to achieve fine-grained attribute management, eliminating the performance bottleneck of a central authority. Each attribute authority works independently and generates a partial key for the user, which makes the scheme resistant to collusion attacks. The security and performance analysis shows that the proposed scheme satisfies the security requirements of data confidentiality, collusion attack resistance and complete policy hiding, and is more suitable for the satellite network than the comparison solutions.
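LSSS access structures generalize threshold secret sharing; the special case of Shamir's (t, n) scheme, sketched below, conveys the mechanism. This is purely illustrative, not the paper's construction, and the field size is arbitrary:

```python
import random

PRIME = 2_147_483_647  # prime modulus defining the finite field

def split(secret, n, t):
    """Shamir (t, n) sharing: evaluate a random degree-(t-1) polynomial
    with constant term `secret` at points x = 1..n."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field; any t shares
    recover the secret, fewer reveal nothing."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split(12345, n=5, t=3)
print(reconstruct(shares[:3]))  # 12345
```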
Reference | Related Articles | Metrics
Human interaction recognition based on RGB and skeleton data fusion model
JI Xiaofei, QIN Linlin, WANG Yangyang
Journal of Computer Applications    2019, 39 (11): 3349-3354.   DOI: 10.11772/j.issn.1001-9081.2019040633
Abstract473)      PDF (993KB)(344)       Save
In recent years, significant progress has been made in human interaction recognition based on RGB video sequences. However, because RGB video lacks depth information, it cannot deliver accurate recognition results for complex interactions. Depth sensors (such as the Microsoft Kinect) can effectively improve the tracking accuracy of whole-body joint points and obtain three-dimensional data that accurately track the movement and changes of the human body. According to the respective characteristics of RGB and joint point data, a convolutional neural network model based on dual-stream fusion of RGB and joint point data was proposed. Firstly, the region of interest of the RGB video in the time domain was obtained by using the ViBe algorithm, and key frames were extracted and mapped to RGB space to obtain a spatial-temporal map representing the video information, which was fed into a convolutional neural network to extract features. Then, vectors were constructed in each frame of the joint point sequence to extract Cosine Distance (CD) and Normalized Magnitude (NM) features; these per-frame features were concatenated in the time order of the joint point sequence and fed into a convolutional neural network to learn higher-level temporal features. Finally, the softmax recognition probability matrices of the two information sources were fused to obtain the final recognition result. The experimental results show that combining RGB video information with joint point information can effectively improve the recognition of human interaction behavior, achieving recognition rates of 92.55% and 80.09% on the public SBU Kinect interaction database and the NTU RGB+D database respectively, which verifies the effectiveness of the proposed model for recognizing interaction behavior between two people.
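The two joint-point features can be sketched directly: cosine distance between vectors formed from joint pairs, and vector magnitude normalized by a body-scale reference. Which joints form the vectors and what normalization reference is used are not specified in the abstract, so the values below are illustrative:

```python
import math

def cosine_distance(v1, v2):
    """CD = 1 - cosine similarity between two joint vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return 1 - dot / (n1 * n2)

def normalized_magnitude(v, body_scale):
    """NM = vector length divided by a body-scale reference length."""
    return math.sqrt(sum(a * a for a in v)) / body_scale

hand_to_hand = (0.3, 0.0, 0.4)  # hypothetical 3-D vector between two joints
print(cosine_distance((1, 0, 0), (0, 1, 0)))    # 1.0 (orthogonal vectors)
print(normalized_magnitude(hand_to_hand, 1.0))  # 0.5
```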
Reference | Related Articles | Metrics
Gastric tumor cell image recognition method based on radial transformation and improved AlexNet
GAN Lan, GUO Zihan, WANG Yao
Journal of Computer Applications    2019, 39 (10): 2923-2929.   DOI: 10.11772/j.issn.1001-9081.2019040709
Abstract293)      PDF (1200KB)(236)       Save
When using AlexNet to classify gastric tumor cell images, there are problems of a small dataset, slow model convergence and a low recognition rate. Aiming at these problems, a Data Augmentation (DA) method based on Radial Transformation (RT) and an improved AlexNet was proposed. The original dataset was divided into a test set and a training set. In the test set, cropping was used to increase the data; in the training set, cropping, rotation, flipping and brightness conversion were employed to obtain the enhanced image set, and some of the images were then selected for RT processing to strengthen the enhancement effect. In addition, the activation functions and normalization layers of AlexNet were replaced to speed up convergence and improve its generalization performance. Experimental results show that the proposed method can recognize gastric tumor cell images with faster convergence and higher recognition accuracy. On the test set, the highest accuracy is 99.50% and the average accuracy is 96.69%, and the F1 scores of the canceration, normal and hyperplasia categories are 0.980, 0.954 and 0.958 respectively, indicating that the proposed method performs gastric tumor cell image recognition well.
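One common reading of a radial transformation for augmentation is resampling the image on polar coordinates about a chosen center; the sketch below uses nearest-neighbour sampling on a tiny grid. The angular resolution is arbitrary and the paper's exact transform may differ:

```python
import math

def radial_transform(img, cx, cy, n_angles=16):
    """Resample `img` on polar coordinates about (cx, cy): each output row
    is one angle step, each column one radius step (nearest-neighbour)."""
    h, w = len(img), len(img[0])
    max_r = int(math.hypot(max(cx, w - 1 - cx), max(cy, h - 1 - cy)))
    out = []
    for a in range(n_angles):
        theta = 2 * math.pi * a / n_angles
        row = []
        for r in range(max_r + 1):
            x = int(round(cx + r * math.cos(theta)))
            y = int(round(cy + r * math.sin(theta)))
            # pixels sampled outside the image are padded with 0
            row.append(img[y][x] if 0 <= x < w and 0 <= y < h else 0)
        out.append(row)
    return out

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
polar = radial_transform(img, 1, 1)
print(len(polar), len(polar[0]))  # 16 2
```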
Reference | Related Articles | Metrics
Forest fire smoke detection model based on deep convolution long short-term memory network
WEI Xin, WU Shuhong, WANG Yaoli
Journal of Computer Applications    2019, 39 (10): 2883-2887.   DOI: 10.11772/j.issn.1001-9081.2019040707
Abstract477)      PDF (774KB)(365)       Save
Since the smoke characteristics of adjacent sampled frames are highly similar, and the forest fire smoke dataset is relatively small and monotonous, in order to make full use of the static and dynamic information of smoke to prevent forest fires, a Deep Convolution Integrated Long Short-Term Memory network (DC-ILSTM) model was proposed. Firstly, a VGG-16 network pre-trained on the ImageNet dataset was used for feature transfer based on isomorphic data to effectively extract smoke characteristics. Secondly, an Integrated Long Short-Term Memory network (ILSTM) based on a pooling layer and Long Short-Term Memory (LSTM) units was proposed, and ILSTM was used for segmental fusion of the smoke characteristics. Finally, a trainable deep neural network model was built for forest fire smoke detection. In the smoke detection experiment, compared with the Deep Convolution Long Recursive Network (DCLRN), DC-ILSTM can detect smoke 10 frames earlier under optimal efficiency and improves the test accuracy by 1.23 percentage points. The theoretical analysis and simulation results show that DC-ILSTM is well suited to forest fire smoke detection.
Reference | Related Articles | Metrics
Image matching method with illumination robustness
WANG Yan, LYU Meng, MENG Xiangfu, LI Yuhao
Journal of Computer Applications    2019, 39 (1): 262-266.   DOI: 10.11772/j.issn.1001-9081.2018061210
Abstract460)      PDF (774KB)(250)       Save
Focusing on the problem that current image matching algorithms based on local features have a low correct matching rate under illumination change, an image matching algorithm robust to illumination was proposed. Firstly, a Real-time Contrast Preserving decolorization (RTCP) algorithm was used to convert the image to grayscale, and a contrast stretching function was applied to simulate the influence of different illumination transformations on the image, so as to extract feature points resistant to illumination change. Then, a feature point descriptor was established using the local intensity order pattern, and matching point pairs were determined according to the Euclidean distance between the local feature point descriptors of the images to be matched. On open datasets, the proposed algorithm was compared with the Scale Invariant Feature Transform (SIFT) algorithm, the Speeded Up Robust Feature (SURF) algorithm, the KAZE algorithm and the ORB (Oriented FAST and Rotated BRIEF) algorithm in matching speed and accuracy. The experimental results show that as the brightness difference between images increases, the matching accuracy of the SIFT, SURF, KAZE and ORB algorithms drops rapidly, while that of the proposed algorithm decreases slowly and stays above 80%. The proposed algorithm detects feature points more slowly and has a higher descriptor dimension, with an average time of 23.47 s, so its matching speed is not as fast as that of the other four algorithms, but its matching quality is much better. The proposed algorithm can overcome the influence of illumination change on image matching.
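The contrast stretching used to simulate illumination change can take a gamma-style form, out = 255·(in/255)^γ. This is one plausible choice; the paper's exact stretching function is not given in the abstract:

```python
def stretch(pixels, gamma):
    """Gamma-style contrast stretch on 8-bit gray values: gamma < 1
    brightens the image, gamma > 1 darkens it."""
    return [round(255 * (p / 255) ** gamma) for p in pixels]

row = [0, 64, 128, 255]
print(stretch(row, 1.0))  # [0, 64, 128, 255] (identity)
print(stretch(row, 0.5))  # midtones brightened, extremes unchanged
```

Applying the stretch with several γ values to one image yields simulated lighting variants from which illumination-stable feature points can be selected.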
Reference | Related Articles | Metrics